Tushar Bhalerao
tusharbhalerao51@gmail.com

CIFAR-10

The CIFAR-10 small photo classification problem is a standard dataset used in computer vision and deep learning.

Although the dataset is effectively solved, it can be used as the basis for learning and practicing how to develop, evaluate, and use convolutional deep learning neural networks for image classification from scratch.

This includes how to develop a robust test harness for estimating the performance of the model, how to explore improvements to the model, and how to save the model and later load it to make predictions on new data.

In this tutorial, you will discover how to develop a convolutional neural network model from scratch for object photo classification.


CIFAR-10 Photo Classification Dataset

CIFAR is an acronym that stands for the Canadian Institute For Advanced Research, and the CIFAR-10 dataset was developed along with the CIFAR-100 dataset by researchers at the CIFAR institute.

The dataset comprises 60,000 32×32 pixel color photographs of objects from 10 classes, such as frogs, birds, cats, and ships. The class labels and their standard associated integer values are listed below.

0: airplane
1: automobile
2: bird
3: cat
4: deer
5: dog
6: frog
7: horse
8: ship
9: truck

These are very small images, much smaller than a typical photograph, and the dataset was intended for computer vision research.
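Since the dataset stores only the integer values 0–9, it is handy to keep the mapping above in code. A minimal sketch (the `class_names` list and `label_to_name` helper are conveniences introduced here, not part of the dataset API):

```python
# CIFAR-10 class names indexed by their standard integer labels (0-9).
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']

def label_to_name(label):
    """Return the human-readable class name for an integer label."""
    return class_names[label]

print(label_to_name(6))  # frog
```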

CIFAR-10 is a well-understood dataset and widely used for benchmarking computer vision algorithms in the field of machine learning. The problem is “solved.” It is relatively straightforward to achieve 80% classification accuracy. Top performance on the problem is achieved by deep learning convolutional neural networks with a classification accuracy above 90% on the test dataset.

Baseline: 1 VGG Block


# example of loading the cifar10 dataset
from matplotlib import pyplot
from keras.datasets import cifar10
# load dataset
(trainX, trainy), (testX, testy) = cifar10.load_data()
# summarize loaded dataset
print('Train: X=%s, y=%s' % (trainX.shape, trainy.shape))
print('Test: X=%s, y=%s' % (testX.shape, testy.shape))
# plot first few images
for i in range(9):
    # define subplot
    pyplot.subplot(330 + 1 + i)
    # plot raw pixel data
    pyplot.imshow(trainX[i])
# show the figure
pyplot.show()

# baseline model with one VGG block on the cifar10 dataset
import sys
from matplotlib import pyplot
from keras.datasets import cifar10
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dense, Flatten
from keras.optimizers import SGD

# load train and test dataset
def load_dataset():
    # load dataset
    (trainX, trainY), (testX, testY) = cifar10.load_data()
    # one hot encode target values
    trainY = to_categorical(trainY)
    testY = to_categorical(testY)
    return trainX, trainY, testX, testY

# scale pixels
def prep_pixels(train, test):
    # convert from integers to floats
    train_norm = train.astype('float32')
    test_norm = test.astype('float32')
    # normalize to range 0-1
    train_norm = train_norm / 255.0
    test_norm = test_norm / 255.0
    # return normalized images
    return train_norm, test_norm

# define cnn model
def define_model():
    model = Sequential()
    model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same', input_shape=(32, 32, 3)))
    model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
    model.add(MaxPooling2D((2, 2)))
    model.add(Flatten())
    model.add(Dense(128, activation='relu', kernel_initializer='he_uniform'))
    model.add(Dense(10, activation='softmax'))
    # compile model
    opt = SGD(learning_rate=0.001, momentum=0.9)
    model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
    return model

# plot diagnostic learning curves
def summarize_diagnostics(history):
    # plot loss
    pyplot.subplot(211)
    pyplot.title('Cross Entropy Loss')
    pyplot.plot(history.history['loss'], color='blue', label='train')
    pyplot.plot(history.history['val_loss'], color='orange', label='test')
    # plot accuracy
    pyplot.subplot(212)
    pyplot.title('Classification Accuracy')
    pyplot.plot(history.history['accuracy'], color='blue', label='train')
    pyplot.plot(history.history['val_accuracy'], color='orange', label='test')
    # save plot to file
    filename = sys.argv[0].split('/')[-1]
    pyplot.savefig(filename + '_plot.png')
    pyplot.close()

# run the test harness for evaluating a model
def run_test_harness():
    # load dataset
    trainX, trainY, testX, testY = load_dataset()
    # prepare pixel data
    trainX, testX = prep_pixels(trainX, testX)
    # define model
    model = define_model()
    # fit model
    history = model.fit(trainX, trainY, epochs=10, batch_size=64, validation_data=(testX, testY), verbose=1)
    # evaluate model
    _, acc = model.evaluate(testX, testY, verbose=1)
    print('> %.3f' % (acc * 100.0))
    # learning curves
    summarize_diagnostics(history)

# entry point, run the test harness
run_test_harness()
Epoch 1/10
782/782 [==============================] - 158s 200ms/step - loss: 1.7703 - accuracy: 0.3701 - val_loss: 1.5234 - val_accuracy: 0.4502
Epoch 2/10
782/782 [==============================] - 147s 188ms/step - loss: 1.4316 - accuracy: 0.4887 - val_loss: 1.3362 - val_accuracy: 0.5247
Epoch 3/10
782/782 [==============================] - 146s 187ms/step - loss: 1.2692 - accuracy: 0.5512 - val_loss: 1.2333 - val_accuracy: 0.5636
Epoch 4/10
782/782 [==============================] - 146s 187ms/step - loss: 1.1612 - accuracy: 0.5898 - val_loss: 1.1687 - val_accuracy: 0.5855
Epoch 5/10
782/782 [==============================] - 146s 187ms/step - loss: 1.0767 - accuracy: 0.6222 - val_loss: 1.1117 - val_accuracy: 0.6156
Epoch 6/10
782/782 [==============================] - 146s 187ms/step - loss: 1.0101 - accuracy: 0.6469 - val_loss: 1.0784 - val_accuracy: 0.6203
Epoch 7/10
782/782 [==============================] - 146s 187ms/step - loss: 0.9523 - accuracy: 0.6656 - val_loss: 1.0318 - val_accuracy: 0.6395
Epoch 8/10
782/782 [==============================] - 146s 187ms/step - loss: 0.8932 - accuracy: 0.6881 - val_loss: 1.0036 - val_accuracy: 0.6536
Epoch 9/10
782/782 [==============================] - 146s 187ms/step - loss: 0.8418 - accuracy: 0.7083 - val_loss: 0.9943 - val_accuracy: 0.6575
Epoch 10/10
782/782 [==============================] - 150s 192ms/step - loss: 0.7963 - accuracy: 0.7257 - val_loss: 0.9688 - val_accuracy: 0.6632
313/313 [==============================] - 8s 25ms/step - loss: 0.9688 - accuracy: 0.6632
> 66.320
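The introduction also mentioned saving the model and later loading it to make predictions on new data. A minimal sketch of that workflow (the filename `final_model.h5` and the stand-in model and image are assumptions for illustration; in practice you would save the fitted model from `run_test_harness()` and load a real 32×32 RGB image scaled to [0, 1]):

```python
# Sketch: save a Keras model to file, reload it, and predict on one image.
import numpy as np
from keras.models import Sequential, load_model
from keras.layers import Dense, Flatten

# stand-in model with the same input/output shape as the baseline CNN
model = Sequential([
    Flatten(input_shape=(32, 32, 3)),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='sgd', loss='categorical_crossentropy')
model.save('final_model.h5')  # assumed filename

# later: load the saved model and predict on a single image
loaded = load_model('final_model.h5')
img = np.random.rand(1, 32, 32, 3).astype('float32')  # stand-in image
probs = loaded.predict(img)  # shape (1, 10): one probability per class
print(probs.argmax())        # predicted class index, 0-9
```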
